
Testing IP Cores

A comprehensive DFT strategy for testing IP.

By Mark Olen and Janusz Rajski


To gain a competitive advantage in today's demanding electronics market, system designers try continually to add new functionality, increase performance, and/or reduce costs in each new development cycle, while beating competitors to market. To achieve these often conflicting goals, designers strive to exploit increases in semiconductor process technology defined by Moore's Law, which observes that the number of available transistors on a silicon substrate doubles every eighteen months. While the Semiconductor Industry Association (SIA) technology roadmap predicts a continuation of this trend into the future, electronic design automation tools and design techniques have tended to lag behind, creating a technology gap that has forced designers to hunt for new methodologies.

Higher density and integration offer the improved performance, better functionality, and lower cost sought by the system designer. As the number of logic gates increases, an ASIC or IC changes its characteristics -- it takes on more and more system functions and becomes a de facto system-on-silicon. Very often, the functionality of these systems is determined by the embedded software. This evolution in ASIC design has forced changes in design methodologies to address system design, analysis, and verification.

Design Reuse and IP Cores

Recently, system-on-silicon designers have begun to turn from traditional synthesis-based to block-based design methodologies. An obvious benefit of block-based design is reduced time-to-functionality through block reuse. Systems designers now leverage previously designed circuits, including controllers, interfaces, memory arrays, DSPs, and virtually any other type of function. Commonly called cores, these reusable blocks have given rise to an intellectual property (IP) industry, in which IP providers amass libraries of functionality for sale to systems designers.

IP can take several forms: soft, firm, or hard. Soft cores are defined as synthesizable RTL or a netlist of library elements. The systems designer is responsible for synthesizing to gates and optimizing the design all the way through layout. Soft cores are the most flexible and portable, but they are also the least predictable. The final core parameters, such as performance, area, and power, depend on the system designer's skill and on the flow and tools used in the design process.

Firm cores are structurally optimized for performance and area across a list of target process technologies. They are more predictable than soft cores but not tied to a specific process. They are also more flexible and portable than hard cores, but they do not guarantee the same level of performance. In contrast, hard cores come as a layout for a specific process technology, optimized for performance, power, and area. Hard cores are highly predictable but neither portable nor flexible.

While the systems design team realizes tremendous productivity gains through block-based design techniques, the manufacturing and test team faces new challenges. How do they ensure high quality and reliability of the block-based designs without suffering time-to-market delays critical to the overall success of the product? Remember that the original goal of the product development process is to add functionality, increase performance, and/or reduce cost, while beating competitors to market.

IP Testing Alternatives

In system-on-silicon designs, testing has become one of the key elements in determining whether a product will be successful. The importance of manufacturing test is obvious if you consider the staggering complexity of these devices. With up to 60 million transistors available on a single silicon substrate, and new failure mechanisms caused by smaller geometries, a high-coverage test program is critical to ensure that only good products are shipped to customers.

Testing block-based designs shares some common needs with testing synthesis-based designs. For instance, it is nearly impossible to develop a cost-effective test solution after a block or ASIC is designed. Retroactive test strategies tend to handcuff test engineers, causing delays in the overall product development cycle. But if a test strategy is considered early in the development process, then minor design methodology adjustments can have a significant impact on the test development process.

Design-for-test (DFT) provides a sliding scale of opportunity for ASIC as well as IP development teams. A complete internal scan solution complemented by boundary-scan provides a near turn-key test generation process, virtually eliminating test preparation time. In one case, a California-based electronics company was able to reduce its test program generation time for a particular ASIC from several weeks to just three days by employing this full-scan DFT methodology. Even when using a partial-scan methodology, design teams can still dramatically reduce the time spent in test development, while imposing minimal impact on a design's performance, area, and design rules.

Another widely recognized ASIC test issue that also applies to IP is the lack of a universal DFT methodology that fits every design or development process. Different types of blocks vary in functionality and performance, and therefore respond differently to different test strategies. It is important to tailor a test strategy to the specific characteristics of a design; there is no "one-size-fits-all" DFT approach. For example, control-intensive blocks respond better to a full-scan approach, and in some cases to logic built-in self-test (BIST), whereas high-performance datapath blocks are more likely candidates for a partial-scan solution. As more memory arrays are implemented in system-on-silicon designs, memory BIST is also becoming attractive.

Beyond these shared DFT concerns, testing block-based designs has some unique issues that are not as evident in synthesis-based designs. By proliferating the use of externally developed blocks, cores, macros, and arrays, development teams have now spread the design knowledge database across a much broader group. In the past, test engineers only had to look as far as their design team to learn about an ASIC's functionality and specifications.

The dynamic between the IP design team and the end-user test team, however, is entirely different. Now that significant blocks of functionality are conceived, developed, and implemented externally, the immediate source of design data has been removed. Where do the system designer and test engineer go to get this data? Although IP is typically available with behavioral functional specifications, an IP provider is generally reluctant to release much information about the implementation of its designs for fear of piracy. Yet this is exactly the data needed to construct a workable test strategy.

The situation is even more extreme in the case of hard cores. With this type of IP, the system designer or test engineer does not even have the option of changing the design and introducing DFT, because the netlist is not available at this stage and the core is in layout form.

Realistic Test For IP Cores

The most fundamental issue for IP is who is responsible for testing IP--the IP provider, the systems designer, or the systems test engineer? At first glance, a solution for IP testing seems to point back to the IP designer. Even for soft and firm cores--where the system designer is responsible for implementing the core starting from a synthesizable netlist--the IP core provider still has to demonstrate design for testability on a reference technology to maximize the predictability of testability.

With hard cores, it appears that the IP designer must build the test capability into the blocks and cores, since hard cores are fixed and encrypted. But what type of test strategy should they employ? The answer depends heavily upon how each block or core is implemented in the end system. The system designer is primarily responsible for assembling the whole testable system, providing access to the DFT features incorporated in the cores, isolating the cores, testing the glue logic, and so on. So how can an IP provider possibly predict how its cores and blocks will be implemented across hundreds of possible designs? From a business point of view, it simply is not economically feasible for an IP provider to offer multiple versions of each and every core just to accommodate different test methodologies.

For example, a Texas-based electronics company developing IP blocks and cores is considering offering each block and core with several testability options selectable by the systems designer, depending on how each core is implemented. If the core is accessible from external pins, a boundary-scan implementation plus a set of ready-made test vectors is the recommended test strategy and will be shipped with the core. If the core is not accessible from external pins, but a few test pins can be multiplexed to the I/O, then boundary scan is scrapped because of its expense and the functional test vectors are no longer needed; instead, a nearly-full-scan test solution, which requires only external multiplexed access to four or five pins, is recommended. If the core is not externally accessible at all, then a combination of internal scan and BIST will be provided. It is easy to see how this approach can become extremely confusing very quickly.
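The selection logic in this scenario amounts to a small decision procedure. The sketch below, in Python, is a hypothetical illustration; the function name, its arguments, and the four-pin threshold are assumptions drawn from the scenario above, not any vendor's actual interface.

# Hypothetical sketch of the per-core test-strategy selection described above.
# Argument names and the four-pin threshold are illustrative, not a vendor API.

def select_test_strategy(directly_accessible, muxable_test_pins):
    if directly_accessible:
        # Core I/O reachable from chip pins: boundary scan plus canned vectors.
        return "boundary scan + precomputed functional vectors"
    if muxable_test_pins >= 4:
        # A few pins can be multiplexed to the I/O: nearly-full scan is cheaper.
        return "nearly-full internal scan via multiplexed test pins"
    # No external access at all: the test must be self-contained.
    return "internal scan + BIST"

print(select_test_strategy(directly_accessible=False, muxable_test_pins=5))
# -> nearly-full internal scan via multiplexed test pins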

Alternative IP Core Testing

Fortunately, there are some realistic test solutions for blocks and cores. While it is impractical to expect IP providers to predict every possible scenario into which each core and block will be placed, it is reasonable for the IP provider to implement a "test-ready" approach that makes the system designer's task achievable. A test-ready IP approach would present cores and blocks prepared with DFT in mind, but would not lock the system designer or test engineer into a specific implementation that makes no sense for their particular application.

For soft cores, IP can be simply prepared with test synthesis script files and model libraries. This leaves the specific choice of DFT methodology up to the system designer, including full or partial scan, boundary-scan, and/or BIST. For hard cores, IP can be designed with testability in mind, and prepared for optimal scan control or BIST insertion by the system designer as well. This technique allows the system designer to share common test circuitry, and thereby reduce the overall impact on the design without affecting the performance or functionality of any core.

Scan As A DFT Solution

For legacy cores designed many years ago with ad hoc or partial-scan DFT, there isn't much choice. Those cores may come with precomputed test vectors that must be applied to the core inputs and measured at the core outputs. In this case, there are two possible solutions: multiplexing and scan isolation.

The first solution uses multiplexers to provide direct access from the chip pins to the core inputs and outputs. The precomputed patterns can be applied directly to the core, and its outputs can be observed on the chip pins. This solution, although simple, has some severe limitations. If the number of core inputs and outputs exceeds the number of chip pins, the patterns cannot be applied directly without time multiplexing. In addition, the approach requires a considerable amount of signal routing.

The other solution for legacy IP, scan isolation, uses isolation rings in the form of scan or boundary scan to provide access to the core, similar to the scan techniques used for testing chips at the board level. The vectors are shifted in through the boundary-scan and internal scan chains and applied to the circuit; the responses are then captured and shifted out for comparison. The main problems are the storage of test data and the long test application time. For example, if a one-million-gate design requires 1 Gbyte of data, the limited bandwidth of the boundary-scan channel could require testing times in excess of one minute--an eternity for manufacturing test. Naturally, the problem gets worse as system complexity grows. Soon there is too much test data moving through too-slow channels.
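A back-of-the-envelope calculation shows why the bandwidth problem bites. The Python sketch below works through the 1-Gbyte example; the single serial channel and the 50-MHz test clock are assumptions made for illustration.

# Rough estimate of scan-isolation test time for the 1-Gbyte example above.
# The single serial channel and 50 MHz test clock are assumed for illustration.

test_data_bytes = 1 * 1024**3           # roughly 1 Gbyte of pattern and response data
bits_to_shift = test_data_bytes * 8     # a serial boundary-scan channel moves 1 bit per clock
tck_hz = 50e6                           # assumed boundary-scan clock frequency

test_time_s = bits_to_shift / tck_hz
print(f"Shift time at {tck_hz / 1e6:.0f} MHz: {test_time_s:.0f} seconds")
# -> Shift time at 50 MHz: 172 seconds, well past the one-minute mark cited above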

For high-performance cores, such as datapath and pipeline architectures, a partial-scan approach is also the best alternative. By scanning the first and last stages of a pipeline, advanced scan-based automatic test pattern generators (ATPG) can quickly and efficiently produce high-coverage test patterns. For applications that are highly sensitive to area and power, such as some consumer and medical electronics products, a partial-scan approach again offers the least invasive DFT alternative. Even for control logic in some applications, a full internal-scan approach offers high testability without the added overhead of BIST.

BIST As A DFT Solution

BIST is rapidly emerging as a robust alternative for new IP designs. It is a design-for-testability methodology in which testing is accomplished by built-in hardware and software features. An ASIC performs BIST by generating test patterns and evaluating test responses on chip.

What is remarkable about BIST is that it embeds the test into the circuit itself. As a result, the amount of test data stored and transferred across the chip boundary is insignificant. Consequently, I/O throughput is not a problem for BIST, even for extremely dense ASICs, and test application time is not limited by that throughput. Most cores can be tested in parallel in an ASIC, as long as the testing does not exceed the device's maximum power consumption and heat dissipation limits. Because BIST reuses test capabilities in a way analogous to design reuse, it offers similar advantages--a shorter product development cycle--while reducing the cost of manufacturing test.
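As a simple illustration of power-limited parallelism, the Python sketch below greedily groups cores into parallel BIST sessions whose combined test power stays within a budget. The core names and power figures are made up for the example; a real scheduler would start from characterized per-core test power.

# Illustrative grouping of core BIST sessions under a power budget.
# The core list and per-core test-power figures are invented for this example.

def schedule_bist_sessions(core_power_mw, budget_mw):
    """Greedily pack cores into parallel sessions whose summed power stays under budget."""
    sessions, current, used = [], [], 0
    for name, power in sorted(core_power_mw.items(), key=lambda kv: -kv[1]):
        if used + power > budget_mw and current:
            sessions.append(current)          # close the session and start a new one
            current, used = [], 0
        current.append(name)
        used += power
    if current:
        sessions.append(current)
    return sessions

cores = {"dsp": 220, "mem0": 180, "mem1": 180, "uart": 40, "ctrl": 60}
print(schedule_bist_sessions(cores, budget_mw=450))
# -> [['dsp', 'mem0'], ['mem1', 'ctrl', 'uart']]  (two sessions instead of five serial tests)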

As a result, BIST is an extremely attractive component of an IP DFT strategy. In fact, as far as testing embedded memories is concerned, there is no alternative: either the memories are tested by BIST or they are not tested at all. BIST for embedded memories implements algorithmic patterns with theoretically proven coverage. The test usually involves march sequences that traverse the whole address space, accessing each memory location and performing read and write operations.
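For readers unfamiliar with march algorithms, the following Python sketch models one simple march sequence in software; a real memory BIST controller implements an equivalent sequence in hardware. The specific march elements and the injected stuck-at cell are illustrative choices, not a claim about any particular BIST product.

# Software model of a simple march-style memory test (illustrative only).
# A stuck-at-0 cell is injected into the memory model to show detection.

def march_test(mem_read, mem_write, size):
    """MATS+-like march: {up(w0); up(r0,w1); down(r1,w0)}. Returns failing addresses."""
    failures = set()
    for addr in range(size):                 # up(w0): initialize every cell to 0
        mem_write(addr, 0)
    for addr in range(size):                 # up(r0, w1): expect 0, then write 1
        if mem_read(addr) != 0:
            failures.add(addr)
        mem_write(addr, 1)
    for addr in reversed(range(size)):       # down(r1, w0): expect 1, then write 0
        if mem_read(addr) != 1:
            failures.add(addr)
        mem_write(addr, 0)
    return sorted(failures)

memory = [0] * 256
STUCK_AT_0 = 42                               # this cell ignores writes and always reads 0

def write(addr, value):
    memory[addr] = 0 if addr == STUCK_AT_0 else value

def read(addr):
    return memory[addr]

print(march_test(read, write, len(memory)))   # -> [42]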

For logic designs, BIST has much to offer. Here, a simple circuit called a linear feedback shift register (LFSR) generates pseudo-random BIST patterns. Responses are compacted into a short statistic by calculating a signature, and the signature obtained from the circuit under test is compared with a fault-free signature. Logic BIST is usually combined with structured DFT such as scan because, in most circuits, pseudo-random patterns cover only 70 to 90 percent of faults within a reasonable time; the remaining faults are random-pattern resistant. Test points are introduced into the circuit to ensure 98- to 99-percent fault coverage.
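The pattern-generation and compaction machinery is small enough to sketch in software. The Python below models a Fibonacci LFSR and a serial signature register; the 16-bit tap set, the seed, and the toy parity "circuit under test" are illustrative assumptions, not part of any particular BIST product.

# Sketch of logic BIST pattern generation and response compaction.
# The 16-bit LFSR taps and the toy circuit under test are illustrative choices.

def lfsr_patterns(seed, taps, width, count):
    """Generate pseudo-random test patterns from a Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def signature(responses, taps, width):
    """Compact a response stream into a signature with a serial-input shift register."""
    sig = 0
    for r in responses:
        feedback = r & 1
        for t in taps:
            feedback ^= (sig >> t) & 1
        sig = ((sig << 1) | feedback) & ((1 << width) - 1)
    return sig

TAPS, WIDTH = (15, 13, 12, 10), 16        # a common maximal-length 16-bit tap set
cut = lambda v: bin(v).count("1") & 1     # stand-in "circuit under test": parity of the pattern

patterns = list(lfsr_patterns(seed=0xACE1, taps=TAPS, width=WIDTH, count=1000))
good_sig = signature((cut(p) for p in patterns), TAPS, WIDTH)
print(hex(good_sig))
# On silicon, the same patterns are applied to the real circuit and the observed
# signature is compared against good_sig; any mismatch flags a faulty device.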

BIST For IP Cores--The Requirements

Two big questions face the IP industry today with regard to BIST. Should BIST for IP core-based designs be different from the logic or memory BIST available today? And how should the BIST test responsibilities be divided between the IP provider and the system designer?

To answer these questions, it is important to remember that the system-on-silicon design process has two distinct phases: IP providers design the cores, and system designers then build the whole system using those cores, but with very limited information about the cores themselves. With this in mind, an optimized solution to BIST of systems-on-silicon with IP cores should meet the following requirements:

Protection of intellectual property--In many cases, IP providers of hard cores cannot make the netlist information available to the users. The BIST solution should therefore work without the netlist information.

Low impact on device overhead--As with any other design for testability, the objective is to minimize the active area and the routing of test signals. The impact on design performance should also be as small as possible.

Simplified integration of cores into a BIST system--There should be minimal knowledge and minimal processing required to assemble IP cores into a system with BIST; however, the testability of the cores within the system should still be very precisely predictable.

Compatibility with design reuse--The solution should be modular and reusable, so a design for BIST can be reused multiple times.

DFT Solutions For IP

One possible way to meet the unique requirements of BIST and IP is to put all the responsibility on the IP provider. For each core it designs, the provider must deliver a full-scan solution complete with a BIST collar consisting of an LFSR, a MISR, and a controller. But this approach has a major drawback. Say a customer purchases five cores to be used in a system. Just testing the cores requires five LFSRs, five MISRs, and five BIST controllers, along with all the multiplexing and interconnect circuitry. The customer must then interface all of this to a boundary-scan TAP controller to create a testable system-on-silicon. This solution meets all the requirements except one important criterion--minimum area. Most system designers, especially for small- and medium-sized cores, find the repetition of BIST controllers, and the area they consume, unacceptable.

A better alternative is to use a central BIST controller that can test many cores. In this approach, the design process has two phases: (1) the design of BIST-ready cores, and (2) system assembly. The IP provider creates and ships BIST-ready cores. For each hard core, the provider implements a nearly full-scan solution, inserts test points, and provides a test specification file that describes the reduced test structure or behavior. For each soft core, the provider includes the script files, models, and simulation data the systems designer needs to implement his or her own scan or BIST.

With this approach, IP customers can create an optimized test solution that shares a single LFSR, MISR, and BIST controller across all cores, which are tested in series. This centralized solution controls combinations of BIST-, internal scan-, and boundary scan-tested cores, depending on each core and how it is implemented. The result is reduced area overhead and less intrusion into the design. More importantly, this solution gives the customer a say in how the test strategy is implemented. Every core in the system is tested by exactly the same set of patterns, thereby guaranteeing high predictability of test quality.
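The centralized scheme can be pictured with a small software model. The Python sketch below sequences BIST-ready cores in series through one shared pattern source and one signature register; the class names, the pseudo-random source, and the injected fault are illustrative stand-ins, not an actual vendor flow.

# Toy model of a shared, centralized BIST controller that tests cores in series.
# All names and behaviors here are illustrative stand-ins.

import random

class BistReadyCore:
    """Minimal stand-in for a BIST-ready core shipped by an IP provider."""
    def __init__(self, name, fault=False):
        self.name = name
        self.fault = fault                     # injected defect, for demonstration only

    def respond(self, pattern):
        # Placeholder response function; a faulty core perturbs its response.
        return (pattern ^ 0x1) if self.fault else pattern

class SharedBistController:
    """One controller sequences every core in series, reusing one pattern source."""
    def __init__(self, cores, seed=0xACE1, count=256):
        self.cores, self.seed, self.count = cores, seed, count

    def _signature(self, core):
        rng = random.Random(self.seed)         # stands in for the single shared LFSR
        sig = 0
        for _ in range(self.count):
            pattern = rng.getrandbits(16)
            sig = ((sig << 1) ^ core.respond(pattern)) & 0xFFFF   # stands in for the MISR
        return sig

    def run(self, golden_signatures):
        return {core.name: self._signature(core) == golden_signatures[core.name]
                for core in self.cores}

# Golden signatures come from fault-free simulation of each core.
fault_free = [BistReadyCore("dsp"), BistReadyCore("mem"), BistReadyCore("uart")]
golden = {c.name: SharedBistController([c])._signature(c) for c in fault_free}

system = SharedBistController([BistReadyCore("dsp"),
                               BistReadyCore("mem"),
                               BistReadyCore("uart", fault=True)])
print(system.run(golden))   # -> {'dsp': True, 'mem': True, 'uart': False}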

IP and DFT

With today's technology, it is possible to build very large systems containing many embedded cores on a single piece of silicon. Because of the size and complexity of these chips and their reliance on IP, DFT will play a major role in determining the overall success of these systems-on-silicon. A complete DFT strategy will combine multiple techniques, each tailored to the specific testability requirements of the components in the system, thereby combining the benefits of several technologies, including internal scan, boundary scan, and BIST.

Mark Olen is the product line manager for Mentor Graphics' Design-for-Test group.
Janusz Rajski is chief scientist for Mentor Graphics' Design-for-Test group.

